Results 1-20 of 2,938
1.
J Neural Eng; 20(6), 2024 Jan 4.
Article in English | MEDLINE | ID: mdl-38134446

ABSTRACT

Objective. Surface electromyography pattern recognition (sEMG-PR) is considered a promising control method for human-machine interaction systems. However, the performance of a trained classifier degrades greatly for novel users, since sEMG signals are user-dependent and strongly affected by individual factors such as the quantity of subcutaneous fat and skin impedance. Approach. To solve this issue, we propose a novel unsupervised cross-individual motion recognition method that aligns sEMG features from different individuals by self-adaptive dimensional dynamic distribution adaptation (SD-DDA). The method minimizes the distances of both the marginal and conditional distributions between source and target features, automatically selecting the optimal feature-domain dimension using a small amount of unlabeled target data. Main results. The effectiveness of the proposed method was tested on four different feature sets. The average classification accuracy improved by more than 10% on our collected dataset, with the best accuracy reaching 90.4%. Compared with six classic transfer-learning methods, the proposed method showed outstanding performance, with improvements of 3.2%-13.8%. It also achieved an improvement of approximately 9% on a publicly available dataset. Significance. These results suggest that the proposed SD-DDA method is feasible for cross-individual motion-intention recognition, which would aid the application of sEMG-PR-based systems.
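
As a rough illustration of the distribution-alignment idea this abstract describes (not the paper's actual SD-DDA algorithm, which is not reproduced here), the sketch below combines marginal and class-conditional distribution distances with a balance factor. The linear-kernel MMD, the factor mu, and all names are illustrative assumptions.

```python
# Minimal sketch of dynamic distribution adaptation in the spirit of SD-DDA;
# the balance factor mu and the linear-kernel MMD are illustrative assumptions.
import numpy as np

def linear_mmd(Xs, Xt):
    """Squared distance between source/target feature means (linear-kernel MMD)."""
    return np.sum((Xs.mean(axis=0) - Xt.mean(axis=0)) ** 2)

def dynamic_distribution_distance(Xs, ys, Xt, yt_pseudo, mu=0.5):
    """Weighted sum of marginal and class-conditional distribution distances.

    mu weights the marginal term against the conditional term; in adaptive
    variants mu itself is estimated from the data rather than fixed.
    """
    marginal = linear_mmd(Xs, Xt)
    conditional = 0.0
    classes = np.unique(ys)
    for c in classes:
        Xs_c, Xt_c = Xs[ys == c], Xt[yt_pseudo == c]
        if len(Xt_c) > 0:
            conditional += linear_mmd(Xs_c, Xt_c)
    conditional /= len(classes)
    return mu * marginal + (1 - mu) * conditional

# Toy usage: random source/target features, with pseudo-labels for the target.
rng = np.random.default_rng(0)
Xs, ys = rng.normal(size=(100, 8)), rng.integers(0, 4, 100)
Xt, yt = rng.normal(loc=0.3, size=(60, 8)), rng.integers(0, 4, 60)
print(dynamic_distribution_distance(Xs, ys, Xt, yt))
```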


Subjects
Algorithms, Gestures, Humans, Pattern Recognition, Automated/methods, Electromyography/methods, Man-Machine Systems
2.
Ergonomics; 66(11): 1730-1749, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37139680

ABSTRACT

Given that automation complacency, a hitherto controversial concept, is already used to blame and punish human drivers in accident investigations and courts, it is essential to map complacency research in driving automation and determine whether current research can support its legitimate usage in these practical fields. Here, we reviewed its status quo in the domain and conducted a thematic analysis. We then discussed five fundamental challenges that might undermine its scientific legitimacy: conceptual confusion over whether it is an individual or a systems problem; uncertainties in current evidence of complacency; a lack of valid measures specific to complacency; short-term laboratory experiments that cannot address the long-term nature of complacency, so that their findings may lack external validity; and the absence of effective interventions that directly target complacency prevention. The Human Factors/Ergonomics community has a responsibility to minimise its usage and defend human drivers who rely on automation that is far from perfect.

Practitioner summary: Human drivers are accused of complacency and overreliance on driving automation in accident investigations and courts. Our review shows that current academic research in the driving automation domain cannot support its legitimate usage in these practical fields. Its misuse will create a new form of consumer harm.


Subjects
Automobile Driving, Social Behavior, Humans, Automation, Ergonomics, Man-Machine Systems, Accidents, Traffic/prevention & control
3.
Appl Ergon; 111: 104027, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37100010

ABSTRACT

Although automation is employed as an aid to human performance, operators often interact with automated decision aids inefficiently. The current study investigated whether anthropomorphic automation would engender higher trust and use, subsequently improving human-automation team performance. Participants performed a multi-element probabilistic signal detection task in which they diagnosed a hypothetical nuclear reactor as being in a state of safety or danger. The task was completed unassisted and assisted by a 93%-reliable agent varying in anthropomorphism. Results gave no evidence that participants' perceptions of anthropomorphism differed between conditions. Further, anthropomorphic automation failed to bolster trust and automation-aided performance. The findings suggest that the benefits of anthropomorphism may be limited in some contexts.


Subjects
Task Performance and Analysis, Trust, Humans, Automation, Man-Machine Systems
4.
IEEE Trans Cybern; 53(12): 7483-7496, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37015459

ABSTRACT

This article presents a systematic review of wearable robotic devices that use human-in-the-loop optimization (HILO) strategies to improve human-robot interaction. A total of 46 HILO studies were identified and divided into upper- and lower-limb robotic devices. The main aspects of HILO were identified, reviewed, and classified into four areas: 1) human-machine systems; 2) optimization methods; 3) control strategies; and 4) experimental protocols. A variety of objective functions (physiological, biomechanical, and subjective), optimization strategies, and optimized control-parameter configurations used in different control strategies are presented and analyzed. An overview of experimental protocols is provided, including the metrics, tasks, and conditions tested, and the relevance given to training or adaptation periods is explored. We outline an HILO framework that encompasses current wearable robots, optimization strategies, objective functions, control strategies, and experimental protocols. We conclude by highlighting current research gaps and defining future directions to improve the development of advanced HILO strategies in upper- and lower-limb wearable robots.
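
To make the HILO loop concrete, here is a minimal, hypothetical sketch of the generic pattern such studies follow: propose controller parameters, measure a noisy physiological objective from the human, and keep improvements. The quadratic objective, parameter ranges, and simple random-search rule are placeholder assumptions, not any reviewed study's actual method.

```python
# Illustrative human-in-the-loop optimization loop (not from the review itself):
# propose controller parameters, measure an objective, keep the best setting.
import numpy as np

rng = np.random.default_rng(1)

def measure_objective(params):
    """Stand-in for a real measurement such as metabolic cost averaged over a
    walking bout; here a synthetic quadratic bowl with sensor noise."""
    optimum = np.array([0.4, 0.7])  # hypothetical best torque timing/magnitude
    return np.sum((params - optimum) ** 2) + rng.normal(scale=0.01)

best_params = rng.uniform(0, 1, size=2)   # e.g., normalized onset time, peak torque
best_cost = measure_objective(best_params)
for _ in range(30):                       # each iteration = one human trial bout
    candidate = np.clip(best_params + rng.normal(scale=0.1, size=2), 0, 1)
    cost = measure_objective(candidate)
    if cost < best_cost:                  # keep parameters that lower the objective
        best_params, best_cost = candidate, cost
print(best_params, best_cost)
```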


Subjects
Robotic Surgical Procedures, Robotics, Humans, Lower Extremity/physiology, Man-Machine Systems
5.
Appl Ergon; 110: 104022, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37019048

ABSTRACT

Automated decision aids typically improve decision-making, but incorrect advice risks automation misuse or disuse. We examined the novel question of whether increased automation transparency improves the accuracy of automation use under conditions with and without concurrent (non-automation-assisted) task demands. Participants completed an uninhabited vehicle (UV) management task in which they assigned the best UV to complete missions. Automation advised the best UV but was not always correct. Concurrent non-automated task demands decreased the accuracy of automation use and increased decision time and perceived workload. With no concurrent task demands, increased transparency, which provided more information on how the automation made decisions, improved the accuracy of automation use. With concurrent task demands, increased transparency led to higher trust ratings, faster decisions, and a bias towards agreeing with automation. These outcomes indicate increased reliance on highly transparent automation under concurrent task demands and have potential implications for human-automation teaming design.


Subjects
Task Performance and Analysis, Workload, Humans, Automation, Trust, Bias, Man-Machine Systems
7.
Sci Rep; 13(1): 2995, 2023 Feb 21.
Article in English | MEDLINE | ID: mdl-36810767

ABSTRACT

Positive human-agent relationships can effectively improve human experience and performance in human-machine systems and environments. The characteristics of agents that enhance this relationship have garnered attention in human-agent and human-robot interaction. In this study, building on the persona effect, we examine how an agent's social cues shape human-agent relationships and human performance. We constructed a tedious task in an immersive virtual environment, designing virtual partners with varying levels of human likeness and responsiveness. Human likeness encompassed appearance, sound, and behavior, while responsiveness referred to the way agents responded to humans. Using this environment, we present two studies exploring the effects of an agent's human likeness and responsiveness on participants' performance and their perception of human-agent relationships during the task. The results indicate that when participants work with an agent, its responsiveness attracts attention and induces positive feelings. Agents with responsiveness and appropriate social response strategies have a significant positive effect on human-agent relationships. These results shed some light on how to design virtual agents that improve user experience and performance in human-agent interactions.


Subjects
Attention, Emotions, Humans, Man-Machine Systems
8.
Sensors (Basel); 23(3), 2023 Jan 27.
Article in English | MEDLINE | ID: mdl-36772464

ABSTRACT

Designing human-machine interactive systems requires cooperation between different disciplines. In this work, we present a Dialogue Manager and a Language Generator that are the core modules of a voice-based Spoken Dialogue System (SDS) capable of carrying out challenging, long, and complex coaching conversations. We also develop an efficient integration procedure for the whole system, which acts as an intelligent and robust Virtual Coach. The coaching task differs significantly from classical SDS applications, resulting in a much higher degree of complexity and difficulty. The Virtual Coach has been successfully tested and validated in a user study with independent older adults in three countries with different languages and cultures: Spain, France, and Norway.
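
For readers unfamiliar with the Dialogue Manager / Language Generator split mentioned above, a toy finite-state sketch follows. The states, intents, and prompts are invented for illustration and bear no relation to the Virtual Coach's actual coaching flows.

```python
# Toy finite-state dialogue manager illustrating the DM/NLG split; everything
# below (states, intents, prompts) is hypothetical, not the project's design.
RESPONSES = {  # a trivial stand-in for the Language Generator
    "greet": "Hello! Shall we review your activity goals today?",
    "coach": "Great. Let's start with a short walking plan.",
    "close": "Well done today. See you tomorrow!",
}

TRANSITIONS = {  # DM policy: (state, user intent) -> next state
    ("greet", "yes"): "coach",
    ("greet", "no"): "close",
    ("coach", "done"): "close",
}

def dialogue_turn(state, user_intent):
    # Unknown intents keep the current state, so the prompt is simply repeated.
    next_state = TRANSITIONS.get((state, user_intent), state)
    return next_state, RESPONSES[next_state]

state = "greet"
print(RESPONSES[state])
for intent in ["yes", "done"]:
    state, reply = dialogue_turn(state, intent)
    print(reply)
```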


Subjects
Communication, Language, Humans, Aged, Man-Machine Systems, Motor Vehicles, France
9.
J Neural Eng; 20(1), 2023 Jan 18.
Article in English | MEDLINE | ID: mdl-36595316

ABSTRACT

Objective. An error-related potential (ErrP) is a potential elicited in the brain when humans perceive an error. ErrPs have been researched in a variety of contexts, such as increasing the reliability of brain-computer interfaces (BCIs), increasing the naturalness of human-machine interaction systems, teaching systems, and studying clinical conditions. Still, detecting ErrPs from a single trial remains a significant challenge, which may hamper their effective use. The literature reports ErrP detection accuracies that vary considerably across studies, which raises the question of whether this variability depends more on the classification pipeline or on the quality of the elicited ErrPs (mostly directly related to the underlying paradigms). Approach. To address this question, 11 datasets were used to compare several classification pipelines, selected from studies that reported online performance above 75%. We also analyze the effects of different steps of the pipelines, such as resampling, window selection, augmentation, feature extraction, and classification. Main results. Our analysis found that shrinkage-regularized linear discriminant analysis is the most robust classification method, and that, for feature extraction, using Fisher-criterion beamformer spatial features and overlapped window averages yields better classification performance. The overall experimental results suggest that classification accuracy is highly dependent on the user task in BCI experiments and on signal quality (in terms of ErrP morphology, signal-to-noise ratio (SNR), and discrimination). Significance. This study contributes to the BCI research field by providing a guideline that can direct researchers in designing ErrP-based BCI tasks and accelerate the design steps.
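
A minimal sketch of the best-performing combination the authors report (overlapped window averages plus shrinkage-regularized LDA) might look like the following. Epoch dimensions, window sizes, and the synthetic data are assumptions, and the Fisher-criterion beamformer step is omitted.

```python
# Sketch: overlapped window averages as features, classified with
# shrinkage-regularized LDA; shapes and window parameters are illustrative.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def window_average_features(epochs, win=25, step=10):
    """Average EEG samples in overlapping time windows.

    epochs: array (n_trials, n_channels, n_samples) -> (n_trials, n_features)
    """
    n_trials, n_channels, n_samples = epochs.shape
    feats = []
    for start in range(0, n_samples - win + 1, step):
        feats.append(epochs[:, :, start:start + win].mean(axis=2))
    return np.concatenate(feats, axis=1)

# Toy data: 200 trials, 32 channels, 150 samples (e.g., 600 ms at 250 Hz).
rng = np.random.default_rng(2)
X = window_average_features(rng.normal(size=(200, 32, 150)))
y = rng.integers(0, 2, 200)  # 1 = error trial, 0 = correct trial

# 'lsqr' with shrinkage='auto' gives the shrinkage-regularized LDA variant.
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
clf.fit(X[:150], y[:150])
print(clf.score(X[150:], y[150:]))
```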


Subjects
Brain-Computer Interfaces, Humans, Electroencephalography/methods, Reproducibility of Results, Brain, Man-Machine Systems, Algorithms
10.
Appl Ergon; 108: 103961, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36640742

ABSTRACT

The purpose of this study was to 1) examine whether the frequency of positive and negative interactions (manipulated via reliability) with a computer agent affects an individual's trust resilience after a major error occurs, and 2) empirically test the notion of relationship equity, the total accumulation of positive and negative interactions and experiences between two actors, through its effect on user trust in a separate transfer task. Participants were randomized into one of four groups, differing in agent positivity and frequency of interaction, and completed both a pattern recognition task and a transfer task with the aid of the same computer agent. Subjective trust ratings, performance data, compliance, and agreement were collected and analyzed. Results demonstrated that the frequency of positive and negative interactions did have an impact on user trust and trust resilience after a major error. Additionally, relationship equity was shown to affect user trust and trust resilience. This is the first empirical demonstration of relationship equity's impact on user trust in an automated teammate.
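
Since the abstract defines relationship equity as an accumulation of positive and negative interactions, a toy tally can make the construct concrete. The weights below are invented for illustration; the study publishes no formula.

```python
# Hypothetical running tally of "relationship equity": positive and negative
# interactions accumulate, with negative events weighted more heavily to
# mirror the common finding that failures affect trust more than successes.
def relationship_equity(interactions, w_pos=1.0, w_neg=1.5):
    """interactions: sequence of +1 (positive) / -1 (negative) events."""
    return sum(w_pos if i > 0 else -w_neg for i in interactions)

print(relationship_equity([+1, +1, -1, +1]))  # frequent, mostly positive history
```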


Subjects
Computers, Trust, Humans, Reproducibility of Results, Automation, Man-Machine Systems
11.
Article in English | MEDLINE | ID: mdl-36673940

ABSTRACT

In today's manufacturing environments, human-machine systems are built from complex and advanced technology that demands considerable mental workload from workers. This work aims to design and evaluate a graphical user interface developed to induce mental workload based on Dual N-Back tasks, for further analysis of human performance. The study's contribution lies in developing proper cognitive analyses of the graphical user interface, identifying human error when the Dual N-Back tasks are presented in an interface, and seeking better user-system interaction. Hierarchical task analysis and the Task Analysis Method for Error Identification were used for the cognitive analysis. Ten subjects participated voluntarily in the study, answering the NASA-TLX questionnaire at the end of the task. The NASA-TLX results captured participants' subjective mental workload; ANOVA on the mean scores confirmed that subjects were induced to different levels of mental workload (low, medium, and high), and the cognitive analysis identified redesign opportunities for improving the graphical user interface.
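
To illustrate the kind of task logic such a GUI would drive, here is a small dual 2-back stimulus generator and scorer. The grid size, letter set, match rate, and trial count are assumptions rather than the study's exact design.

```python
# Minimal dual 2-back generator and scorer: positions on a 3x3 grid plus
# letters (spoken in a real task); parameters are illustrative assumptions.
import random

def generate_dual_nback(n_trials=20, n=2, match_rate=0.3, seed=0):
    rng = random.Random(seed)
    positions, letters = [], []
    for t in range(n_trials):
        if t >= n and rng.random() < match_rate:
            positions.append(positions[t - n])     # forced position match
        else:
            positions.append(rng.randrange(9))
        if t >= n and rng.random() < match_rate:
            letters.append(letters[t - n])         # forced letter match
        else:
            letters.append(rng.choice("CHKLQRST"))
    return positions, letters

def score_responses(stream, responses, n=2):
    """responses[t] is True if the participant reported a t vs t-n match."""
    hits = misses = false_alarms = 0
    for t in range(n, len(stream)):
        match = stream[t] == stream[t - n]
        if match and responses[t]:
            hits += 1
        elif match:
            misses += 1
        elif responses[t]:
            false_alarms += 1
    return hits, misses, false_alarms

positions, letters = generate_dual_nback()
print(score_responses(positions, [False] * 20))  # a participant who never responds
```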


Subjects
Task Performance and Analysis, Workload, Humans, Workload/psychology, Man-Machine Systems, Surveys and Questionnaires, Cognition
12.
Int J Occup Saf Ergon; 29(2): 855-862, 2023 Jun.
Article in English | MEDLINE | ID: mdl-35658817

ABSTRACT

The rational design of alarm signals in man-machine systems is an important factor in determining whether safety accidents occur. Neuroergonomics provides a new perspective on the cognitive processing of alarm signals: it can reveal the mechanism of human perception of visual alarm signals at the cognitive level of the brain and thereby assess the effectiveness of alarm signals. This study simulated the cooling man-machine system of a new energy vehicle, used the automatic control interface of a test cooling-water system as the stimulus material, and applied event-related potential techniques from cognitive neuroscience for experimental verification. The results showed that all three kinds of alarm signals (color, color+shape, color+orientation) induced visual mismatch waves, and that the effectiveness of the human response to the alarm signals, ordered from smallest to largest, was color+orientation, color+shape, and color, providing a reference for the design of alarm signals.


Subjects
Cognition, Man-Machine Systems, Humans
13.
Hum Factors; 65(5): 862-878, 2023 Aug.
Article in English | MEDLINE | ID: mdl-34459266

ABSTRACT

OBJECTIVE: We examine how human operators adjust their trust in automation as a result of their moment-to-moment interaction with it. BACKGROUND: Most existing studies measured trust by administering questionnaires at the end of an experiment. Only a limited number of studies viewed trust as a dynamic variable that can strengthen or decay over time. METHOD: Seventy-five participants took part in an aided memory recognition task. Participants viewed a series of images and later performed 40 trials of a recognition task, identifying a target image presented alongside a distractor. In each trial, participants performed the initial recognition by themselves, received a recommendation from an automated decision aid, and then made a final recognition judgment. After each trial, participants reported their trust on a visual analog scale. RESULTS: Outcome bias and a contrast effect significantly influence operators' trust adjustments. An automation failure leads to a larger trust decrement if the final outcome is undesirable, and a marginally larger trust decrement if the operator succeeds at the task on their own. An automation success engenders a greater trust increment if the operator fails the task. Additionally, automation failures have a larger effect on trust adjustment than automation successes. CONCLUSION: Human operators adjust their trust in automation as a result of their moment-to-moment interaction with it, and their adjustments are significantly influenced by decision-making heuristics and biases. APPLICATION: Understanding the trust-adjustment process enables accurate prediction of operators' moment-to-moment trust in automation and informs the design of trust-aware adaptive automation.
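
The reported asymmetries suggest a simple bounded trust-update model. The sketch below is one hypothetical formalization; all gain parameters are invented rather than fitted to the study's data.

```python
# Toy moment-to-moment trust model consistent with the reported asymmetry:
# failures decrement trust more than successes increment it, with an extra
# penalty when the final outcome is bad (outcome bias). Gains are assumptions.
def update_trust(trust, automation_correct, outcome_desirable,
                 gain_success=0.05, gain_failure=0.15, outcome_penalty=0.05):
    if automation_correct:
        trust += gain_success * (1.0 - trust)          # bounded increment
    else:
        decrement = gain_failure
        if not outcome_desirable:                      # outcome bias term
            decrement += outcome_penalty
        trust -= decrement * trust                     # bounded decrement
    return min(max(trust, 0.0), 1.0)

trust = 0.5
for correct, desirable in [(True, True), (False, False), (True, True)]:
    trust = update_trust(trust, correct, desirable)
    print(round(trust, 3))
```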


Subjects
Task Performance and Analysis, Trust, Humans, Male, Automation, Awareness, Heuristics, Man-Machine Systems
14.
Ergonomics; 66(2): 217-226, 2023 Feb.
Article in English | MEDLINE | ID: mdl-35451925

ABSTRACT

Previous research has suggested that supervising automation can lead to a decrease in human performance, especially when the automation is not totally reliable. Providing context-related information about reliability can help operators better adjust their behaviour in a human-automation interaction context. However, previous studies have not specified how accurate this information should be. The objective of this study was to investigate the effects of different levels of information accuracy about an automation's reliability on human performance. Results showed that accurate reliability information improves performance when specific reliability percentages are given to participants; performance was best in the high-accuracy condition. A link between perceived reliability and trust was found: as trust in automation increased, perceived reliability increased.

Practitioner summary: The experiment examined how accurate information about an automation's reliability influences people's performance when supervising an automated task. Overall, this research suggests that designing systems that provide accurate, useful information can reduce the frequency of automation bias. Trust and perceived reliability of automation are related.

Abbreviations: MATB: Multi-Attribute Task Battery.


Subjects
Task Performance and Analysis, Trust, Humans, Reproducibility of Results, Automation, Electric Power Supplies, Man-Machine Systems
15.
Hum Factors; 65(8): 1596-1612, 2023 Dec.
Article in English | MEDLINE | ID: mdl-34979821

ABSTRACT

OBJECTIVE: Examine (1) the extent to which humans can accurately estimate automation reliability and calibrate to changes in reliability, and how this is impacted by the recent accuracy of the automation; and (2) factors that impact the acceptance of automated advice, including true automation reliability, perceived reliability, and the difference between an operator's perception of the automation's reliability and of their own reliability. BACKGROUND: Existing evidence suggests humans can adapt to changes in automation reliability but generally underestimate reliability. Cognitive science indicates that humans weight evidence from more recent experiences heavily. METHOD: Participants monitored the behavior of maritime vessels (contacts) in order to classify them, and then received advice from automation regarding the classification. Participants were assigned to either an initially high (90%) or low (60%) automation reliability condition. After some time, reliability switched to 75% in both conditions. RESULTS: Participants initially underestimated automation reliability. After the change in true reliability, estimates in both conditions moved towards the common true reliability but did not reach it. There were recency effects, with lower future reliability estimates immediately following incorrect automation advice. With lower initial reliability, automation acceptance rates tracked true reliability more closely than perceived reliability. A positive difference between participants' assessments of the automation's reliability and of their own reliability predicted greater automation acceptance. CONCLUSION: Humans underestimate the reliability of automation, and we have demonstrated several critical factors that impact the perception of automation reliability and automation use. APPLICATION: The findings have potential implications for training and adaptive human-automation teaming.
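
One common way to model the recency effect described here is an exponentially weighted estimate of reliability. The learning rate alpha below is an assumed value, not one estimated from the study.

```python
# Exponentially weighted reliability estimate: recent advice outcomes weigh
# more heavily than older ones; alpha is an illustrative assumption.
def update_reliability_estimate(estimate, advice_correct, alpha=0.2):
    """Move the estimate toward 1 after correct advice, toward 0 after errors;
    larger alpha weights recent trials more heavily."""
    return estimate + alpha * ((1.0 if advice_correct else 0.0) - estimate)

estimate = 0.9  # initial expectation, e.g., the high-reliability condition
for advice_correct in [True, True, False, True, False, True]:
    estimate = update_reliability_estimate(estimate, advice_correct)
    print(round(estimate, 3))  # dips right after each incorrect advice event
```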


Subjects
Man-Machine Systems, Perception, Humans, Reproducibility of Results, Automation
16.
Hum Factors; 65(5): 846-861, 2023 Aug.
Article in English | MEDLINE | ID: mdl-34340583

ABSTRACT

OBJECTIVE: Examine the effects of decision risk and automation transparency on the accuracy and timeliness of operator decisions, automation verification rates, and subjective workload. BACKGROUND: Decision aids typically benefit performance but can provide incorrect advice due to contextual factors, creating the potential for automation disuse or misuse. Decision aids can reduce an operator's manual problem evaluation, and it can also be strategic for operators to minimize verifying automated advice in order to manage workload. METHOD: Participants assigned the optimal unmanned vehicle to complete missions. A decision aid provided advice but was not always reliable. Two levels of decision-aid transparency were manipulated between participants. The risk associated with each decision was manipulated using a financial incentive scheme. Participants could use a calculator to verify automated advice, but this incurred a financial penalty. RESULTS: For high- compared with low-risk decisions, participants were more likely to reject incorrect automated advice, were more likely to verify automation, and reported higher workload. Increased transparency did not lead to more accurate decisions and did not impact workload, but it decreased automation verification and eliminated the increased decision time associated with high decision risk. CONCLUSION: Increased automation transparency was beneficial in that it decreased automation verification and decision time. The increased workload and automation verification for high-risk missions is not necessarily problematic given the improved automation correct rejection rate. APPLICATION: The findings have potential application to the design of interfaces that improve human-automation teaming, and for anticipating the impact of decision risk on operator behavior.
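
The verification trade-off can be illustrated with a back-of-envelope expected-value comparison. All payoffs and probabilities below are hypothetical and are not the study's incentive values.

```python
# Expected value of accepting automated advice versus paying to verify it;
# numbers are invented to show why verification pays for high-risk decisions.
def expected_value_accept(p_correct, reward, loss):
    return p_correct * reward - (1 - p_correct) * loss

def expected_value_verify(reward, penalty):
    return reward - penalty   # verification guarantees the right choice

p_correct = 0.8               # assumed accuracy of unverified advised decisions
for risk_loss in (5, 50):     # low- vs high-risk missions
    accept = expected_value_accept(p_correct, reward=10, loss=risk_loss)
    verify = expected_value_verify(reward=10, penalty=2)
    print(f"loss={risk_loss}: accept={accept:.1f}, verify={verify:.1f}")
```

With these placeholder numbers, accepting dominates for the low-risk mission, while verification dominates once the potential loss is large, which is consistent with the higher verification rates reported for high-risk decisions.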


Subjects
Task Performance and Analysis, Workload, Humans, Automation, Man-Machine Systems
17.
Hum Factors; 65(4): 546-561, 2023 Jun.
Article in English | MEDLINE | ID: mdl-34348511

ABSTRACT

OBJECTIVE: Assess performance, trust, and visual attention during the monitoring of a near-perfect automated system. BACKGROUND: Research rarely attempts to assess performance, trust, and visual attention for near-perfect automated systems, even though they will be relied on in high-stakes environments. METHOD: Seventy-three participants completed a 40-min supervisory control task in which they monitored three search feeds. All search feeds were 100% reliable with the exception of two automation failures: one miss and one false alarm. Eye-tracking and subjective trust data were collected. RESULTS: Thirty-four percent of participants correctly identified the automation miss, and 67% correctly identified the automation false alarm. Subjective trust increased when participants did not detect the automation failures and decreased when they did. Participants who detected the false alarm had a more complex scan pattern in the 2 min centered around the automation failure compared with those who did not. Additionally, those who detected the failures dwelled longer on the center sensor feed and transitioned to it significantly more often. CONCLUSION: Not only does this work highlight the limitations of the human when monitoring near-perfect automated systems, it also begins to quantify the subjective experience and attentional cost for the human. It further emphasizes the need to (1) reevaluate the role of the operator in future high-stakes environments and (2) understand the human on an individual level and actively design for the given individual when working with near-perfect automated systems. APPLICATION: Multiple operator-level measures should be collected in real time in order to monitor an operator's state and leverage real-time, individualized assistance.
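
Dwell times and transition counts of the kind analyzed here reduce to simple bookkeeping over a labeled gaze stream. The area-of-interest names and sampling interval in this sketch are assumptions.

```python
# Dwell-time and transition counts from a per-sample gaze stream labeled by
# area of interest (AOI); AOI names and the 20 ms sample are illustrative.
from collections import Counter

def gaze_metrics(aoi_stream, sample_ms=20):
    """aoi_stream: per-sample AOI labels, e.g. 'left' / 'center' / 'right'."""
    dwell_ms = Counter()
    transitions = Counter()
    for i, aoi in enumerate(aoi_stream):
        dwell_ms[aoi] += sample_ms
        if i > 0 and aoi != aoi_stream[i - 1]:
            transitions[(aoi_stream[i - 1], aoi)] += 1
    return dwell_ms, transitions

stream = ["left"] * 10 + ["center"] * 30 + ["right"] * 5 + ["center"] * 15
dwell, trans = gaze_metrics(stream)
print(dwell["center"], trans[("left", "center")])
```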


Subjects
Task Performance and Analysis, Trust, Humans, Automation, Language, Man-Machine Systems
18.
Hum Factors; 65(4): 533-545, 2023 Jun.
Article in English | MEDLINE | ID: mdl-34375538

ABSTRACT

OBJECTIVE: Examine the impact of expected automation reliability on trust, workload, task disengagement, nonautomated task performance, and the detection of a single automation failure in simulated air traffic control. BACKGROUND: Prior research has focused on the impact of experienced automation reliability. However, many operational settings feature automation so reliable that operators will seldom experience automation failures. Despite this, operators must remain aware of when automation is at greater risk of failing. METHOD: Participants performed the task with or without conflict detection/resolution automation. The automation failed to detect/resolve one conflict (an automation miss). Expected reliability was manipulated via instructions such that (a) the expected level of reliability was constant or variable, and (b) the single automation failure occurred when expected reliability was high or low. RESULTS: Trust in automation increased with time on task prior to the automation failure. Trust was higher when expecting high relative to low reliability. Automation failure detection was improved when the failure occurred under low compared with high expected reliability. Subjective workload decreased with automation, but there was no improvement in nonautomated task performance. Automation increased perceived task disengagement. CONCLUSIONS: Both automation reliability expectations and task experience played a role in determining trust. Automation failure detection improved when the failure occurred at a time it was expected to be more likely. Participants did not effectively allocate spared capacity to nonautomated tasks. APPLICATIONS: The outcomes are applicable because operators in field settings likely form contextual expectations regarding the reliability of automation.


Subjects
Aviation, Task Performance and Analysis, Humans, Reproducibility of Results, Workload, Automation, Man-Machine Systems
19.
Ergonomics; 66(2): 291-302, 2023 Feb.
Article in English | MEDLINE | ID: mdl-35583421

ABSTRACT

Consumer automation is a suitable venue for studying the efficacy of untested humanness design methods for promoting component-specific trust in multi-component systems. Subjective (trust, self-confidence) and behavioural (use, manual override) measures were recorded as 82 participants interacted with a four-component automation-bearing system in a simulated smart-home task across two experimental blocks. During the first block, all components were perfectly reliable (100%). During the second block, one component became unreliable (60%). Participants interacted with a system containing either a single simulated voice assistant or four of them. In the single-assistant condition, the unreliable component resulted in trust changes for every component. In the four-assistant condition, trust decreased only for the unreliable component. Across agent-number conditions, use decreased between blocks only for the unreliable component. Self-confidence and overrides exhibited ceiling and floor effects, respectively. Our findings provide the first evidence of effectively using humanness design to enhance component-specific trust in consumer systems.

Practitioner summary: Participants interacted with simulated smart-home multi-component systems that contained one or four voiced assistants. In the single-voice condition, one component's decreasing reliability coincided with trust changes for all components. In the four-voice condition, trust decreased only for the decreasingly reliable component. The number of voices did not influence use strategies.

Abbreviations: ACC: adaptive cruise control; CST: component-specific trust; SWT: system-wide trust; UAV: unmanned aerial vehicle; CPRS: complacency potential rating scale; MANOVA: multivariate analysis of variance.


Subjects
Task Performance and Analysis, Trust, Humans, Reproducibility of Results, Man-Machine Systems, Automation
20.
Hum Factors; 65(1): 137-165, 2023 Feb.
Article in English | MEDLINE | ID: mdl-33906505

ABSTRACT

OBJECTIVE: This paper reviews recent articles related to human trust in automation to guide research and design for increasingly capable automation in complex work environments. BACKGROUND: Two recent trends, the development of increasingly capable automation and the flattening of organizational hierarchies, suggest that a reframing of trust in automation is needed. METHOD: Many publications related to human trust and human-automation interaction were integrated in this narrative literature review. RESULTS: Much research has focused on calibrating human trust to promote appropriate reliance on automation. This approach neglects relational aspects of increasingly capable automation and system-level outcomes, such as cooperation and resilience. To address these limitations, we adopt a relational framing of trust based on the decision situation, semiotics, interaction sequence, and strategy. This relational framework stresses that the goal is not to maximize trust, or even to calibrate trust, but to support a process of trusting through automation responsivity. CONCLUSION: This framing clarifies why future work on trust in automation should consider not just individual characteristics and how automation influences people, but also how people can influence automation and how interdependent interactions affect trusting automation. In these new technological and organizational contexts that shift human operators to co-operators of automation, automation responsivity and the ability to resolve conflicting goals may be more relevant than reliability and reliance for advancing system design. APPLICATION: A conceptual model comprising four concepts (situation, semiotics, strategy, and sequence) can guide future trust research and design for automation responsivity and more resilient human-automation systems.


Subjects
Man-Machine Systems, Trust, Humans, Reproducibility of Results, Automation, Motivation